
    Do airstream mechanisms influence tongue movement paths?

    Velar consonants often show an elliptical pattern of tongue movement in symmetrical vowel contexts, but the forces responsible for this remain unclear. Here we consider the role of overpressure (increased intraoral air pressure) behind the constriction by examining how movement patterns are modified when speakers change from an egressive to an ingressive airstream. Tongue movement and respiratory data were obtained from three speakers. The two airstream conditions were additionally combined with two levels of speech volume. The results showed consistent reductions in forward tongue movement during consonant closure in the ingressive conditions. Thus, overpressure behind the constriction may partly determine preferred movement patterns, but it cannot be the only influence, since forward movement during closure is usually reduced but not eliminated in ingressive speech.
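
    As a rough illustration of the kind of measure involved, the sketch below computes net forward tongue movement during consonant closure from articulatory kinematics. It is not the authors' analysis code; the sensor signal, sampling rate, and closure times are assumed inputs.

```python
# Minimal sketch (not the study's code) of quantifying forward tongue movement
# during consonant closure. Assumptions: `tongue_x` is the anterior-posterior
# position (mm, larger = more forward) of a tongue sensor sampled at `fs` Hz,
# and `closure_on` / `closure_off` are closure onset/offset times in seconds
# obtained from a separate labelling step.
import numpy as np

def forward_movement_during_closure(tongue_x, fs, closure_on, closure_off):
    """Net forward displacement (mm) between closure onset and offset."""
    i0 = int(round(closure_on * fs))
    i1 = int(round(closure_off * fs))
    segment = np.asarray(tongue_x)[i0:i1 + 1]
    return segment[-1] - segment[0]

# Synthetic example: a sensor that advances steadily, giving ~2 mm of forward
# movement over a 0.2 s closure interval.
fs = 250
t = np.arange(0, 0.4, 1 / fs)
tongue_x = 50.0 + 10.0 * t
print(forward_movement_during_closure(tongue_x, fs, 0.10, 0.30))
```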

    The stimulus as a basis for audiovisual integration

    We argue here that examination of the stimulus source is a prerequisite to understanding how audiovisual (AV) stimuli are processed perceptually. This is based on mounting evidence that the act of speech production generates multimodal events whose audible and visible components are highly correlated with each other and with the vocal tract source. How this multimodal structuring is exploited perceptually, or not, needs to be demonstrated by studies that take the properties of the stimulus source into account.

    Temporal control and compensation for perturbed voicing feedback

    Previous research employing a real-time auditory perturbation paradigm has shown that talkers monitor their own speech attributes, such as fundamental frequency, vowel intensity, vowel formants, and fricative noise, as part of speech motor control. In the case of vowel formants or fricative noise, what was manipulated is spectral information about the filter function of the vocal tract. However, segments can be contrasted by parameters other than spectral configuration. It is possible that the feedback system monitors phonation timing in the same way it does spectral information. This study examined whether talkers exhibit compensatory behavior when information about voicing is manipulated. When talkers received feedback of the cognate of the intended voicing category (saying “tipper” while hearing “dipper”, or vice versa), they changed their voice onset time and, in some cases, the following vowel.
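
    To make the kind of comparison at stake concrete, the sketch below contrasts voice onset time (VOT) between baseline and perturbed-feedback trials to look for a compensatory shift. It is illustrative only, not the study's analysis, and the trial values are made-up numbers standing in for measured VOTs.

```python
# Illustrative sketch only (not the study's analysis): comparing voice onset
# time (VOT) between baseline trials and trials with perturbed voicing feedback
# to look for a compensatory shift. VOT values are assumed to have been
# measured elsewhere; the numbers below are made up for the example.
import numpy as np
from scipy import stats

baseline_vot = np.array([62.0, 58.5, 65.2, 60.1, 63.4])   # ms, normal feedback
perturbed_vot = np.array([71.3, 69.8, 74.0, 68.5, 72.1])  # ms, cognate feedback

shift = perturbed_vot.mean() - baseline_vot.mean()
t_stat, p_value = stats.ttest_ind(perturbed_vot, baseline_vot)
print(f"mean VOT shift: {shift:.1f} ms (t = {t_stat:.2f}, p = {p_value:.3f})")
```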

    Analysis of facial motion patterns during speech using a matrix factorization algorithm

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consist of three-dimensional displacement records of a set of markers located on a subject’s face during speech production. A QR factorization with column pivoting selects a subset of markers with linearly independent motion patterns. This subset is used as a basis to fit the motion of the remaining facial markers, which determines the facial region influenced by each of the independent markers. These regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records.
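
    The sketch below shows one plausible reading of this pipeline, not the authors' implementation: QR factorization with column pivoting selects a subset of marker displacement channels that are (nearly) linearly independent, and the remaining markers are then fit by least squares on that subset. The data matrix, marker count, and number of retained markers are assumptions for the example.

```python
# Sketch of the kind of pipeline described above (one plausible reading, not the
# authors' implementation): QR factorization with column pivoting picks a subset
# of marker displacement channels that are (nearly) linearly independent, and
# the remaining markers are then fit by least squares on that subset.
import numpy as np
from scipy.linalg import qr

def select_independent_markers(X, k):
    """Indices of k markers chosen by QR factorization with column pivoting.

    X has one column per marker displacement channel and one row per time sample.
    """
    _, _, piv = qr(X, mode='economic', pivoting=True)
    return piv[:k]

def fit_remaining_markers(X, basis_idx):
    """Least-squares weights W such that X is approximated by X[:, basis_idx] @ W."""
    B = X[:, basis_idx]
    W, *_ = np.linalg.lstsq(B, X, rcond=None)
    return W

# Toy example: 12 marker channels generated from 3 underlying motion sources,
# so 3 pivoted columns suffice to reconstruct the full data set.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 12))
basis_idx = select_independent_markers(X, k=3)
W = fit_remaining_markers(X, basis_idx)
print(basis_idx, np.allclose(X, X[:, basis_idx] @ W))
```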